Search Results: feature extraction using AlexNet


90
MathWorks Inc command activations for alexnet
Command Activations For Alexnet, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/command activations for alexnet/product/MathWorks Inc
Average 90 stars, based on 1 article review
command activations for alexnet - by Bioz Stars, 2026-05
90/100 stars

90
Kaggle Inc penultimate features from alexnet
Our approach identifies the most salient regions in different classes for image classification using AlexNet. From top to bottom: original image, MARGIN’s explanation overlaid on the image, and Grad-CAM’s explanation. Note that our approach yields highly specific and sparse explanations from different regions in the image for a given class.
Penultimate Features From Alexnet, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/penultimate features from alexnet/product/Kaggle Inc
Average 90 stars, based on 1 article review
penultimate features from alexnet - by Bioz Stars, 2026-05
90/100 stars

90
Kaggle Inc deepsea
Our approach identifies the most salient regions in different classes for image classification using AlexNet. From top to bottom: original image, MARGIN’s explanation overlaid on the image, and Grad-CAM’s explanation. Note that our approach yields highly specific and sparse explanations from different regions in the image for a given class.
Deepsea, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/deepsea/product/Kaggle Inc
Average 90 stars, based on 1 article review
deepsea - by Bioz Stars, 2026-05
90/100 stars

90
SoftMax Inc alexnet (feature extraction)
AlexNet architecture used for COVID-19 classification.
Alexnet (Feature Extraction), supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet (feature extraction)/product/SoftMax Inc
Average 90 stars, based on 1 article review
alexnet (feature extraction) - by Bioz Stars, 2026-05
90/100 stars

90
MathWorks Inc command activations for emotionnet
A. ANN models of the primate ventral stream (typically comprising V1-, V2-, V4-, and IT-like layers) can be trained to predict human facial emotion judgments. This involves building a regression model, i.e., determining the weights based on the model layer activations (as the predictor) to predict the image ground truth (“level of happiness”) on a set of training images, and then testing the predictions of this model on held-out images. B. An ANN model’s predicted psychometric curves (e.g., AlexNet, shown here) show the proportion of trials judged as “happy” as a function of facial emotion morph levels ranging from 0% happy (100% fearful; left) to 100% happy (0% fearful; right). This curve demonstrates that activations of ANN layers (layer ‘fc7’, which corresponds to the “model-IT” layer) can be successfully trained to predict facial emotions. C. Comparison of the ANN’s image-level behavioral patterns with the behavior measured in Controls (x-axis) and IwA (y-axis). Four ANNs (with 5 models each, generated from different layers of the ANNs) are shown here in different colors. ANN predictions better match the behavior measured in the Controls compared to IwA. The correlation values (x and y axes) were corrected by the noise estimates per human population so that the differences are not due to differences in noise levels in measurements across the IwA and Control subject pools. The dot size refers to the degree of discrepancy between ANN predictivity of Controls vs. IwA. D. A comparison of the ANN predictivity (results from AlexNet shown here) of behavior measured in IwA vs. Controls as a function of model layers (convolutional (cnv) layers 1, 3, 4, and 5 and the fully connected layer 7, ‘fc7’, which approximately corresponds to the ventral stream cortical hierarchy). The difference between the ANN’s predictivity of behavior in IwA and Controls increases with depth and is referred to as Δ. E. Discriminability index (d’; ability to discriminate between image-level behavioral patterns measured in IwA vs. Controls; see Methods) as a function of model layers (all four tested models shown separately in individual panels). The difference in ANN predictivity between Controls and IwA was largest at the deeper (more IT-like) layers of the models rather than at the earlier (more V1-, V2-, and V4-like) layers. Error bars denote bootstrap confidence intervals. Facial images shown in this figure are morphed and processed versions of the original face images. These images have full re-use permission.
Command Activations For Emotionnet, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/command activations for emotionnet/product/MathWorks Inc
Average 90 stars, based on 1 article review
command activations for emotionnet - by Bioz Stars, 2026-05
90/100 stars

90
SoftMax Inc resnet-50+softmax
A. ANN models of the primate ventral stream (typically comprising V1-, V2-, V4-, and IT-like layers) can be trained to predict human facial emotion judgments. This involves building a regression model, i.e., determining the weights based on the model layer activations (as the predictor) to predict the image ground truth (“level of happiness”) on a set of training images, and then testing the predictions of this model on held-out images. B. An ANN model’s predicted psychometric curves (e.g., AlexNet, shown here) show the proportion of trials judged as “happy” as a function of facial emotion morph levels ranging from 0% happy (100% fearful; left) to 100% happy (0% fearful; right). This curve demonstrates that activations of ANN layers (layer ‘fc7’, which corresponds to the “model-IT” layer) can be successfully trained to predict facial emotions. C. Comparison of the ANN’s image-level behavioral patterns with the behavior measured in Controls (x-axis) and IwA (y-axis). Four ANNs (with 5 models each, generated from different layers of the ANNs) are shown here in different colors. ANN predictions better match the behavior measured in the Controls compared to IwA. The correlation values (x and y axes) were corrected by the noise estimates per human population so that the differences are not due to differences in noise levels in measurements across the IwA and Control subject pools. The dot size refers to the degree of discrepancy between ANN predictivity of Controls vs. IwA. D. A comparison of the ANN predictivity (results from AlexNet shown here) of behavior measured in IwA vs. Controls as a function of model layers (convolutional (cnv) layers 1, 3, 4, and 5 and the fully connected layer 7, ‘fc7’, which approximately corresponds to the ventral stream cortical hierarchy). The difference between the ANN’s predictivity of behavior in IwA and Controls increases with depth and is referred to as Δ. E. Discriminability index (d’; ability to discriminate between image-level behavioral patterns measured in IwA vs. Controls; see Methods) as a function of model layers (all four tested models shown separately in individual panels). The difference in ANN predictivity between Controls and IwA was largest at the deeper (more IT-like) layers of the models rather than at the earlier (more V1-, V2-, and V4-like) layers. Error bars denote bootstrap confidence intervals. Facial images shown in this figure are morphed and processed versions of the original face images. These images have full re-use permission.
Resnet 50+Softmax, supplied by SoftMax Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/resnet-50+softmax/product/SoftMax Inc
Average 90 stars, based on 1 article review
resnet-50+softmax - by Bioz Stars, 2026-05
90/100 stars

90
Kaggle Inc alexnet
Studies using Deep Learning methods for detection of SCZ.
Alexnet, supplied by Kaggle Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet/product/Kaggle Inc
Average 90 stars, based on 1 article review
alexnet - by Bioz Stars, 2026-05
90/100 stars

90
Rocha labs alexnet
Review of existing leaf disease methodologies with limitations.
Alexnet, supplied by Rocha labs, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet/product/Rocha labs
Average 90 stars, based on 1 article review
alexnet - by Bioz Stars, 2026-05
90/100 stars

90
Hinton labs alexnet
Review of existing leaf disease methodologies with limitations.
Alexnet, supplied by Hinton labs, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet/product/Hinton labs
Average 90 stars, based on 1 article review
alexnet - by Bioz Stars, 2026-05
90/100 stars

90
MathWorks Inc alexnet network
Convolutional neural network computation of saliency maps. A model of artificial vision was used to predict the most salient areas in an image and test whether attention maps derived from digit-tracking and eye-tracking exploration data are sensitive to the same features in visual scenes. a Convolutional neural network architecture. The first five convolutional layers of the AlexNet network were used for feature map extraction, and features are linearly combined in the last layer to produce saliency maps (e.g., Fig. ). b Hierarchical ordering of learned weights in the last layer of the convolutional neural network (CNN). The X-axis denotes the 256 outputs, while the Y-axis denotes the mean Pearson correlation between an individual channel and the measured saliency map. Each channel can be seen as a saliency map sensitive to a single feature class in the picture. A strong positive correlation coefficient indicates a highly attractive feature, while a strong negative correlation indicates a highly avoided feature. c Correlation between the weights learned using eye- and digit-tracking (Set A: r_Pearson = 0.95, p < 1 × 10⁻¹²⁸; Set B: r_Pearson = 0.96, p < 1 × 10⁻¹⁴⁷). d High-level features are visualized by identifying in the picture database the most responsive pixels for the considered CNN channel. Examples of the most attractive and the most avoided features, corresponding respectively to the 3 most positively correlated and the 3 most negatively correlated channels of the CNN. Human explorations are particularly sensitive to eyes or eye-like areas, faces, and highly contrasted details, while uniform areas with natural colors, textures, and repetitive symbols are generally avoided.
Alexnet Network, supplied by MathWorks Inc, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet network/product/MathWorks Inc
Average 90 stars, based on 1 article review
alexnet network - by Bioz Stars, 2026-05
90/100 stars

90
EyePACS LLC alexnet
Summary of related work.
Alexnet, supplied by EyePACS LLC, used in various techniques. Bioz Stars score: 90/100, based on 1 PubMed citation. ZERO BIAS - scores, article reviews, protocol conditions and more
https://www.bioz.com/result/alexnet/product/EyePACS LLC
Average 90 stars, based on 1 article review
alexnet - by Bioz Stars, 2026-05
90/100 stars

Image Search Results


Our approach identifies the most salient regions in different classes for image classification using AlexNet. From top to bottom: original image, MARGIN’s explanation overlaid on the image, and Grad-CAM’s explanation. Note that our approach yields highly specific and sparse explanations from different regions in the image for a given class.

Journal: Frontiers in Big Data

Article Title: MARGIN: Uncovering Deep Neural Networks Using Graph Signal Analysis

doi: 10.3389/fdata.2021.589417

Figure Legend Snippet: Our approach identifies the most salient regions in different classes for image classification using AlexNet. From top to bottom: original image, MARGIN’s explanation overlaid on the image, and Grad-CAM’s explanation. Note that our approach yields highly specific and sparse explanations from different regions in the image for a given class.

Article Snippet: For Kaggle, we use penultimate features from AlexNet in order to construct a neighborhood graph.

Techniques:
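
The article snippet above describes extracting penultimate AlexNet features and using them to build a neighborhood graph. A minimal MATLAB sketch of that step might look as follows; the image folder, the choice of 'fc7' as the penultimate layer, and the neighborhood size k are illustrative assumptions, not details taken from the MARGIN paper.

    % Minimal sketch: penultimate (fc7) AlexNet features feeding a k-nearest-neighbor graph.
    % Requires Deep Learning Toolbox (with the AlexNet support package) and
    % Statistics and Machine Learning Toolbox.
    net = alexnet;                                                   % pretrained AlexNet
    imds = imageDatastore('images/', 'IncludeSubfolders', true);     % hypothetical image folder
    augimds = augmentedImageDatastore(net.Layers(1).InputSize(1:2), imds);  % resize to 227x227
    F = activations(net, augimds, 'fc7', 'OutputAs', 'rows');        % N x 4096 feature matrix
    k = 10;                                                          % assumed neighborhood size
    [idx, ~] = knnsearch(F, F, 'K', k + 1);                          % nearest neighbors (column 1 is the point itself)
    idx = idx(:, 2:end);
    n = size(F, 1);
    A = sparse(repmat((1:n)', k, 1), idx(:), 1, n, n);               % adjacency matrix of the k-NN graph
    A = max(A, A');                                                  % symmetrize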

AlexNet architecture used for COVID-19 classification.

Journal: Ipem-Translation

Article Title: Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging

doi: 10.1016/j.ipemt.2022.100008

Figure Legend Snippet: AlexNet architecture used for COVID-19 classification.

Article Snippet: X-ray, Combination of Two Different DBs, 105 COVID-19, 11 SARS, 80 Normal Img., 70%:30%, Hold-out, ImageNet, Supervised, Data Augmentation, Histogram, Feature Extraction using AlexNet, PCA, K-means, COVID-19 from Other Pneumonia, NA, DeTraC (Based on ResNet18), AlexNet (Feature Extraction), Softmax, Composition Phase, Acc=95.12, Sen=97.91, Spe=91.87.

Techniques:
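
The table row quoted above lists a pipeline of data augmentation, feature extraction using AlexNet, PCA, and K-means. A rough MATLAB sketch of the feature-extraction, PCA, and clustering steps follows; the folder name, the 'fc7' layer, the number of retained principal components, and the number of clusters are assumptions for illustration only.

    % Sketch of "Feature Extraction using AlexNet, PCA, K-means" (parameter values assumed).
    net = alexnet;
    imds = imageDatastore('chest_xrays/', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    augimds = augmentedImageDatastore(net.Layers(1).InputSize(1:2), imds, ...
        'ColorPreprocessing', 'gray2rgb');                    % X-rays are single-channel
    F = activations(net, augimds, 'fc7', 'OutputAs', 'rows'); % deep features, one row per image
    [~, score] = pca(F);                                      % principal component projection
    Fp = score(:, 1:50);                                      % keep the first 50 components (assumed)
    groups = kmeans(Fp, 2);                                   % cluster, e.g., COVID-19 vs. other pneumonia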

A combined architecture of AlexNet, SqueezeNet, GoogleNet, and MobileNetV2 for COVID-19 detection.

Journal: Ipem-Translation

Article Title: Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging

doi: 10.1016/j.ipemt.2022.100008

Figure Legend Snippet: A combined architecture of AlexNet, SqueezeNet, GoogleNet, and MobileNetV2 for COVID-19 detection.

Article Snippet: X-ray, Combination of Two Different DBs, 105 COVID-19, 11 SARS, 80 Normal Img., 70%:30%, Hold-out, ImageNet, Supervised, Data Augmentation, Histogram, Feature Extraction using AlexNet, PCA, K-means, COVID-19 from Other Pneumonia, NA, DeTraC (Based on ResNet18), AlexNet (Feature Extraction), Softmax, Composition Phase, Acc=95.12, Sen=97.91, Spe=91.87.

Techniques:

Summary of recent DL techniques for COVID-19 diagnosis.

Journal: Ipem-Translation

Article Title: Towards smart diagnostic methods for COVID-19: Review of deep learning for medical imaging

doi: 10.1016/j.ipemt.2022.100008

Figure Legend Snippet: Summary of recent DL techniques for COVID-19 diagnosis.

Article Snippet: X-ray, Combination of Two Different DBs, 105 COVID-19, 11 SARS, 80 Normal Img., 70%:30%, Hold-out, ImageNet, Supervised, Data Augmentation, Histogram, Feature Extraction using AlexNet, PCA, K-means, COVID-19 from Other Pneumonia, NA, DeTraC (Based on ResNet18), AlexNet (Feature Extraction), Softmax, Composition Phase, Acc=95.12, Sen=97.91, Spe=91.87.

Techniques: Biomarker Discovery, Extraction, Activation Assay, Bacteria, Preserving, Shear, Infection, Sampling, Blocking Assay

A. ANN models of the primate ventral stream (typically comprising V1-, V2-, V4-, and IT-like layers) can be trained to predict human facial emotion judgments. This involves building a regression model, i.e., determining the weights based on the model layer activations (as the predictor) to predict the image ground truth (“level of happiness”) on a set of training images, and then testing the predictions of this model on held-out images. B. An ANN model’s predicted psychometric curves (e.g., AlexNet, shown here) show the proportion of trials judged as “happy” as a function of facial emotion morph levels ranging from 0% happy (100% fearful; left) to 100% happy (0% fearful; right). This curve demonstrates that activations of ANN layers (layer ‘fc7’, which corresponds to the “model-IT” layer) can be successfully trained to predict facial emotions. C. Comparison of the ANN’s image-level behavioral patterns with the behavior measured in Controls (x-axis) and IwA (y-axis). Four ANNs (with 5 models each, generated from different layers of the ANNs) are shown here in different colors. ANN predictions better match the behavior measured in the Controls compared to IwA. The correlation values (x and y axes) were corrected by the noise estimates per human population so that the differences are not due to differences in noise levels in measurements across the IwA and Control subject pools. The dot size refers to the degree of discrepancy between ANN predictivity of Controls vs. IwA. D. A comparison of the ANN predictivity (results from AlexNet shown here) of behavior measured in IwA vs. Controls as a function of model layers (convolutional (cnv) layers 1, 3, 4, and 5 and the fully connected layer 7, ‘fc7’, which approximately corresponds to the ventral stream cortical hierarchy). The difference between the ANN’s predictivity of behavior in IwA and Controls increases with depth and is referred to as Δ. E. Discriminability index (d’; ability to discriminate between image-level behavioral patterns measured in IwA vs. Controls; see Methods) as a function of model layers (all four tested models shown separately in individual panels). The difference in ANN predictivity between Controls and IwA was largest at the deeper (more IT-like) layers of the models rather than at the earlier (more V1-, V2-, and V4-like) layers. Error bars denote bootstrap confidence intervals. Facial images shown in this figure are morphed and processed versions of the original face images. These images have full re-use permission.

Journal: bioRxiv

Article Title: A computational probe into the behavioral and neural markers of atypical facial emotion processing in autism

doi: 10.1101/2021.03.24.436640

Figure Legend Snippet: A. ANN models of the primate ventral stream (typically comprising V1-, V2-, V4-, and IT-like layers) can be trained to predict human facial emotion judgments. This involves building a regression model, i.e., determining the weights based on the model layer activations (as the predictor) to predict the image ground truth (“level of happiness”) on a set of training images, and then testing the predictions of this model on held-out images. B. An ANN model’s predicted psychometric curves (e.g., AlexNet, shown here) show the proportion of trials judged as “happy” as a function of facial emotion morph levels ranging from 0% happy (100% fearful; left) to 100% happy (0% fearful; right). This curve demonstrates that activations of ANN layers (layer ‘fc7’, which corresponds to the “model-IT” layer) can be successfully trained to predict facial emotions. C. Comparison of the ANN’s image-level behavioral patterns with the behavior measured in Controls (x-axis) and IwA (y-axis). Four ANNs (with 5 models each, generated from different layers of the ANNs) are shown here in different colors. ANN predictions better match the behavior measured in the Controls compared to IwA. The correlation values (x and y axes) were corrected by the noise estimates per human population so that the differences are not due to differences in noise levels in measurements across the IwA and Control subject pools. The dot size refers to the degree of discrepancy between ANN predictivity of Controls vs. IwA. D. A comparison of the ANN predictivity (results from AlexNet shown here) of behavior measured in IwA vs. Controls as a function of model layers (convolutional (cnv) layers 1, 3, 4, and 5 and the fully connected layer 7, ‘fc7’, which approximately corresponds to the ventral stream cortical hierarchy). The difference between the ANN’s predictivity of behavior in IwA and Controls increases with depth and is referred to as Δ. E. Discriminability index (d’; ability to discriminate between image-level behavioral patterns measured in IwA vs. Controls; see Methods) as a function of model layers (all four tested models shown separately in individual panels). The difference in ANN predictivity between Controls and IwA was largest at the deeper (more IT-like) layers of the models rather than at the earlier (more V1-, V2-, and V4-like) layers. Error bars denote bootstrap confidence intervals. Facial images shown in this figure are morphed and processed versions of the original face images. These images have full re-use permission.

Article Snippet: The model features, per layer, were extracted using the MATLAB command activations for AlexNet, VGGFace, and EmotionNet in MATLAB R2020b.

Techniques: Generated
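
The article snippet above notes that per-layer features were extracted with MATLAB's activations command and used in a regression model predicting the image ground truth ("level of happiness"). A minimal sketch of that two-step procedure follows; the datastore variables, the 'fc7' layer, the ridge penalty, and the train/test split are illustrative assumptions rather than the authors' exact settings.

    % Sketch: layer activations as predictors in a ridge-regularized linear regression.
    net = alexnet;
    inSize = net.Layers(1).InputSize(1:2);
    augTrain = augmentedImageDatastore(inSize, imdsTrain);   % imdsTrain/imdsTest: hypothetical face-image datastores
    augTest  = augmentedImageDatastore(inSize, imdsTest);
    Xtrain = activations(net, augTrain, 'fc7', 'OutputAs', 'rows');
    Xtest  = activations(net, augTest,  'fc7', 'OutputAs', 'rows');
    % yTrain holds the ground-truth happiness level per training image (assumed to be in [0, 1]).
    mdl = fitrlinear(Xtrain, yTrain, 'Learner', 'leastsquares', ...
                     'Regularization', 'ridge', 'Lambda', 1e-3);
    yPred = predict(mdl, Xtest);                             % predicted happiness for held-out images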

Studies using Deep Learning methods for detection of SCZ.

Journal: Frontiers in Human Neuroscience

Article Title: A systematic review of EEG based automated schizophrenia classification through machine learning and deep learning

doi: 10.3389/fnhum.2024.1347082

Figure Legend Snippet: Studies using Deep Learning methods for detection of SCZ.

Article Snippet: Short Term FFT, Continuous WT, and SPWVD, AlexNet, ResNet50, VGG16, and CNN, EEG, Kaggle SCZ dataset ( ), 93.36.

Techniques:
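
The row above pairs time-frequency transforms of EEG (short-term FFT, continuous wavelet transform, SPWVD) with pretrained CNNs such as AlexNet. The sketch below shows one plausible way to turn an EEG segment into a spectrogram image that AlexNet can consume; the sampling rate, window, overlap, and the use of fc7 features are assumptions, not the cited study's exact configuration.

    % Sketch: EEG segment -> log-magnitude spectrogram image -> AlexNet features.
    % Requires Signal Processing Toolbox and Image Processing Toolbox.
    fs = 250;                                                % assumed EEG sampling rate (Hz)
    [s, ~, ~] = spectrogram(eegSegment, hamming(128), 96, 256, fs);  % short-term FFT
    P = log(abs(s) + eps);                                   % log-magnitude spectrogram
    img = imresize(mat2gray(P), [227 227]);                  % rescale to AlexNet's input size
    rgb = uint8(255 * repmat(img, 1, 1, 3));                 % replicate to three channels
    net = alexnet;
    feat = activations(net, rgb, 'fc7', 'OutputAs', 'rows'); % features for a downstream classifier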

Review of existing leaf disease methodologies with limitations.

Journal: Scientific Reports

Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification

doi: 10.1038/s41598-024-72237-x

Figure Legend Snippet: Review of existing leaf disease methodologies with limitations.

Article Snippet: Da Rocha et al., BO DL, AlexNet, ResNet50, SqueezeNet, Lack of extensive hyperparameter optimization.

Techniques: Extraction, Modification
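
The article title above refers to a Bayesian-optimized model, and the snippet faults earlier work for limited hyperparameter optimization. A minimal MATLAB sketch of Bayesian hyperparameter search is given below; the variables, ranges, and the trainAndEvaluate helper are hypothetical placeholders, not the paper's actual objective.

    % Sketch of Bayesian hyperparameter optimization (ranges and objective assumed).
    % Requires Statistics and Machine Learning Toolbox.
    vars = [optimizableVariable('LearnRate', [1e-4, 1e-1], 'Transform', 'log'), ...
            optimizableVariable('BatchSize', [8, 64], 'Type', 'integer')];
    % trainAndEvaluate is a hypothetical helper that trains briefly with the candidate
    % hyperparameters and returns a validation error in [0, 1].
    objFcn = @(p) trainAndEvaluate(p.LearnRate, p.BatchSize);
    results = bayesopt(objFcn, vars, 'MaxObjectiveEvaluations', 20, 'Verbose', 0);
    best = bestPoint(results);                               % best hyperparameters found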

Comparison of the proposed approach with the latest approaches (tomato leaf 10 classes).

Journal: Scientific Reports

Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification

doi: 10.1038/s41598-024-72237-x

Figure Legend Snippet: Comparison of the proposed approach with the latest approaches (tomato leaf 10 classes).

Article Snippet: Da Rocha et al., BO DL, AlexNet, ResNet50, SqueezeNet, Lack of extensive hyperparameter optimization.

Techniques: Comparison, Modification

Comparison of the suggested approach with recently established models for various crops.

Journal: Scientific Reports

Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification

doi: 10.1038/s41598-024-72237-x

Figure Legend Snippet: Comparison of the suggested approach with recently established models for various crops.

Article Snippet: Da Rocha et al., BO DL, AlexNet, ResNet50, SqueezeNet, Lack of extensive hyperparameter optimization.

Techniques: Comparison

Comparison of the proposed model's training parameters with state-of-the-art models.

Journal: Scientific Reports

Article Title: Bayesian optimized multimodal deep hybrid learning approach for tomato leaf disease classification

doi: 10.1038/s41598-024-72237-x

Figure Legend Snippet: Comparison of the proposed model's training parameters with state-of-the-art models.

Article Snippet: Da Rocha et al., BO DL, AlexNet, ResNet50, SqueezeNet, Lack of extensive hyperparameter optimization.

Techniques: Comparison

Convolutional neural network computation of saliency maps. A model of artificial vision was used to predict the most salient areas in an image and test whether attention maps derived from digit-tracking and eye-tracking exploration data are sensitive to the same features in visual scenes. a Convolutional neural network architecture. The first five convolutional layers of the AlexNet network were used for feature map extraction, and features are linearly combined in the last layer to produce saliency maps (e.g., Fig. ). b Hierarchical ordering of learned weights in the last layer of the convolutional neural network (CNN). The X-axis denotes the 256 outputs, while the Y-axis denotes the mean Pearson correlation between an individual channel and the measured saliency map. Each channel can be seen as a saliency map sensitive to a single feature class in the picture. A strong positive correlation coefficient indicates a highly attractive feature, while a strong negative correlation indicates a highly avoided feature. c Correlation between the weights learned using eye- and digit-tracking (Set A: r_Pearson = 0.95, p < 1 × 10⁻¹²⁸; Set B: r_Pearson = 0.96, p < 1 × 10⁻¹⁴⁷). d High-level features are visualized by identifying in the picture database the most responsive pixels for the considered CNN channel. Examples of the most attractive and the most avoided features, corresponding respectively to the 3 most positively correlated and the 3 most negatively correlated channels of the CNN. Human explorations are particularly sensitive to eyes or eye-like areas, faces, and highly contrasted details, while uniform areas with natural colors, textures, and repetitive symbols are generally avoided.

Journal: Nature Communications

Article Title: Digit-tracking as a new tactile interface for visual perception analysis

doi: 10.1038/s41467-019-13285-0

Figure Legend Snippet: Convolutional neural network computation of saliency maps. A model of artificial vision was used to predict the most salient areas in an image and test whether attention maps derived from digit-tracking and eye-tracking exploration data are sensitive to the same features in visual scenes. a Convolutional neural network architecture. The first five convolutional layers of the AlexNet network were used for feature map extraction, and features are linearly combined in the last layer to produce saliency maps (e.g., Fig. ). b Hierarchical ordering of learned weights in the last layer of the convolutional neural network (CNN). The X-axis denotes the 256 outputs, while the Y-axis denotes the mean Pearson correlation between an individual channel and the measured saliency map. Each channel can be seen as a saliency map sensitive to a single feature class in the picture. A strong positive correlation coefficient indicates a highly attractive feature, while a strong negative correlation indicates a highly avoided feature. c Correlation between the weights learned using eye- and digit-tracking (Set A: r_Pearson = 0.95, p < 1 × 10⁻¹²⁸; Set B: r_Pearson = 0.96, p < 1 × 10⁻¹⁴⁷). d High-level features are visualized by identifying in the picture database the most responsive pixels for the considered CNN channel. Examples of the most attractive and the most avoided features, corresponding respectively to the 3 most positively correlated and the 3 most negatively correlated channels of the CNN. Human explorations are particularly sensitive to eyes or eye-like areas, faces, and highly contrasted details, while uniform areas with natural colors, textures, and repetitive symbols are generally avoided.

Article Snippet: To this end, the MATLAB (MathWorks Inc.) implementation of the AlexNet network was selected.

Techniques: Derivative Assay, Extraction
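
The figure legend above describes linearly combining the 256 feature channels of AlexNet's last convolutional stage, with learned weights, to produce a saliency map. The sketch below illustrates only that combination step; the input image, the 'relu5' layer name, and the placeholder weight vector w are assumptions (in the paper, the weights are learned by regressing onto measured attention maps, which is not shown here).

    % Sketch: weighted linear combination of AlexNet conv-stage channels into a saliency map.
    net = alexnet;
    img = imresize(imread('scene.jpg'), net.Layers(1).InputSize(1:2));   % hypothetical input image
    A = activations(net, img, 'relu5');                      % 13 x 13 x 256 feature maps
    w = randn(256, 1);                                       % placeholder channel weights (would be learned)
    S = sum(A .* reshape(w, 1, 1, []), 3);                   % combine channels with the weights
    S = imresize(S, size(img, [1 2]));                       % upsample to image resolution
    S = mat2gray(S);                                         % normalize to [0, 1]
    imshow(S); colormap hot;                                 % display as a heat map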

Summary of related work.

Journal: Sensors (Basel, Switzerland)

Article Title: ResNet Based Deep Features and Random Forest Classifier for Diabetic Retinopathy Detection

doi: 10.3390/s21113883

Figure Legend Snippet: Summary of related work.

Article Snippet: 2017, Mansour et al. [ ], AlexNet with multiple optimization techniques, Accuracy of 95.26% with principal component analysis and 97.93% with FC7 features, EyePACS.

Techniques: Biomarker Discovery, Activation Assay
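
The article above combines ResNet-based deep features with a random forest classifier for diabetic retinopathy detection. A rough MATLAB sketch of that pipeline follows; the folder layout, the 'avg_pool' feature layer, the 80/20 split, and the number of trees are assumptions for illustration.

    % Sketch: ResNet-50 deep features + random forest classifier.
    net = resnet50;                                          % pretrained ResNet-50
    imds = imageDatastore('fundus_images/', 'IncludeSubfolders', true, 'LabelSource', 'foldernames');
    [imdsTrain, imdsTest] = splitEachLabel(imds, 0.8, 'randomized');
    inSize = net.Layers(1).InputSize(1:2);
    Ftrain = activations(net, augmentedImageDatastore(inSize, imdsTrain), 'avg_pool', 'OutputAs', 'rows');
    Ftest  = activations(net, augmentedImageDatastore(inSize, imdsTest),  'avg_pool', 'OutputAs', 'rows');
    rf = TreeBagger(200, Ftrain, imdsTrain.Labels, 'Method', 'classification');
    pred = predict(rf, Ftest);                               % cell array of predicted class names
    acc = mean(categorical(pred) == imdsTest.Labels);        % hold-out accuracy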